36 research outputs found

    Reasoning algebraically about refinement on TSO architectures

    Get PDF
    The Total Store Order memory model is widely implemented by modern multicore architectures such as x86, where local buffers are used for optimisation, allowing limited forms of instruction reordering. The presence of buffers and hardware-controlled buffer flushes increases the level of non-determinism beyond that specified by a program, complicating the already difficult task of concurrent programming. This paper presents a new notion of refinement for weak memory models, based on the observation that pending writes to a process's local variables may be treated as if the effect of the update has already occurred in shared memory. We develop an interval-based model with algebraic rules for various programming constructs. In this framework, several decomposition rules for our new notion of refinement are developed. We apply our approach to verify a spinlock algorithm from the literature.
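
    As a concrete illustration of the buffering behaviour described above, the following minimal Python sketch (not taken from the paper) models the classic store-buffering litmus test under TSO: each simulated thread writes into a private FIFO buffer before flushing to shared memory, so both threads can read the stale initial values, an outcome that sequential consistency forbids.

```python
# Minimal sketch of TSO store buffering (illustrative only, not the paper's
# interval-based model). Each thread buffers its stores locally; reads consult
# the thread's own buffer first (store forwarding), then shared memory.

shared = {"x": 0, "y": 0}

class TSOThread:
    def __init__(self):
        self.buffer = []                      # pending (var, value) stores, FIFO

    def write(self, var, value):
        self.buffer.append((var, value))      # store goes to the local buffer

    def read(self, var):
        for v, val in reversed(self.buffer):  # forward from own pending stores
            if v == var:
                return val
        return shared[var]                    # otherwise read shared memory

    def flush(self):
        while self.buffer:                    # hardware-controlled buffer flush
            v, val = self.buffer.pop(0)
            shared[v] = val

t0, t1 = TSOThread(), TSOThread()
t0.write("x", 1)        # buffered, not yet visible to t1
t1.write("y", 1)        # buffered, not yet visible to t0
r0 = t0.read("y")       # reads 0 from shared memory
r1 = t1.read("x")       # reads 0 from shared memory
t0.flush(); t1.flush()
print(r0, r1)           # 0 0 -- allowed under TSO, impossible under SC
```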

    Bounded Model Checking of Concurrent Data Types on Relaxed Memory Models: A Case Study

    Get PDF
    Many multithreaded programs employ concurrent data types to safely share data among threads. However, highly-concurrent algorithms for even seemingly simple data types are difficult to implement correctly, especially when considering the relaxed memory ordering models commonly employed by today’s multiprocessors. The formal verification of such implementations is challenging as well because the high degree of concurrency leads to a large number of possible executions. In this case study, we develop a SAT-based bounded verification method and apply it to a representative example, a well-known two-lock concurrent queue algorithm. We first formulate a correctness criterion that specifically targets failures caused by concurrency; it demands that all concurrent executions be observationally equivalent to some serial execution. Next, we define a relaxed memory model that conservatively approximates several common shared-memory multiprocessors. Using commit point specifications, a suite of finite symbolic tests, a prototype encoder, and a standard SAT solver, we successfully identify two failures of a naive implementation that can be observed only under relaxed memory models. We eliminate these failures by inserting appropriate memory ordering fences into the code. The experiments confirm that our approach provides a valuable aid for designing and implementing concurrent data types.
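
    The queue studied in this case study is the well-known two-lock algorithm with separate head and tail locks. The rough Python sketch below (my own, not the paper's verified artifact) shows only the algorithm's shape; Python's locks act as full fences, so the relaxed-memory failures the paper identifies cannot be reproduced with it.

```python
# Sketch of the classic two-lock concurrent queue: enqueue and dequeue take
# different locks, so the two operations can proceed concurrently.

import threading

class Node:
    def __init__(self, value=None):
        self.value = value
        self.next = None

class TwoLockQueue:
    def __init__(self):
        dummy = Node()                     # sentinel node shared by head and tail
        self.head = dummy
        self.tail = dummy
        self.head_lock = threading.Lock()
        self.tail_lock = threading.Lock()

    def enqueue(self, value):
        node = Node(value)
        with self.tail_lock:
            self.tail.next = node          # link new node after the current tail
            self.tail = node               # swing tail to the new node

    def dequeue(self):
        with self.head_lock:
            first = self.head.next         # first real element, after the sentinel
            if first is None:
                return None                # queue is empty
            self.head = first              # old sentinel is discarded
            return first.value

q = TwoLockQueue()
q.enqueue(1); q.enqueue(2)
print(q.dequeue(), q.dequeue(), q.dequeue())   # 1 2 None
```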

    Software Verification for Weak Memory via Program Transformation

    Get PDF
    Despite multiprocessors implementing weak memory models, verification methods often assume Sequential Consistency (SC) and may therefore miss bugs due to weak memory. We propose a sound transformation of the program to verify, enabling SC tools to perform verification with respect to weak memory. We present experiments for a broad variety of models (from x86/TSO to Power/ARM) and a vast range of verification tools, quantify the additional cost of the transformation, and highlight the cases in which we can drastically reduce it. Our benchmarks include work-queue management code from PostgreSQL.
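
    The transformation itself targets real programs and existing SC verifiers; the Python sketch below is only a hedged illustration of the underlying idea: each shared store is rewritten to pass through an explicit per-thread buffer, and flush points are left to the verifier's nondeterministic choice (simulated here with random).

```python
# Illustrative sketch (not the paper's actual instrumentation): making a TSO
# store buffer explicit so that an SC verifier exploring the transformed
# program covers weak-memory behaviours of the original one.

import random                     # stands in for the verifier's nondeterminism

shared = {"x": 0, "flag": 0}
buffer = []                       # per-thread FIFO of pending (var, value) stores

def original_thread():
    shared["x"] = 42              # before the transformation: plain stores
    shared["flag"] = 1

def buffered_store(var, value):
    buffer.append((var, value))   # after the transformation: store is buffered
    maybe_flush()

def maybe_flush():
    # the real transformation leaves flush points to the model checker;
    # random.random() merely simulates one nondeterministic resolution
    while buffer and random.random() < 0.5:
        var, value = buffer.pop(0)
        shared[var] = value

def fence():
    while buffer:                 # a fence drains the buffer completely
        var, value = buffer.pop(0)
        shared[var] = value

def transformed_thread():
    buffered_store("x", 42)
    buffered_store("flag", 1)
    fence()                       # both stores are now globally visible

transformed_thread()
print(shared)                     # {'x': 42, 'flag': 1}
```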

    The Effect of Contention on the Scalability of Page-Based Software Shared Memory Systems

    No full text

    Shared memory consistency models: a tutorial

    No full text

    An Operational Model for Multiprocessors with Caches

    No full text

    Configuration Method of Multiple Clusters for the Computational Grid

    No full text

    Lifetime Reliability: Toward an Architectural Solution

    No full text

    Memory-model-sensitive data race analysis

    No full text
    We present a “memory-model-sensitive” approach to validating correctness properties for multithreaded programs. Our key insight is that by specifying both the inter-thread memory consistency model and the intra-thread program semantics as constraints, a program verification task can be reduced to an equivalent constraint solving problem, thus allowing an exhaustive examination of all thread interleavings precisely allowed by a given memory model. To demonstrate, this paper formalizes race conditions according to the new Java memory model, for a simplified but non-trivial source language. We then describe the implementation of a memory-model-sensitive race detector using constraint logic programming (CLP). In comparison with conventional program analysis, our approach does not offer the same kind of performance and scalability due to the complexity involved in exact formal reasoning. However, we show that a formal semantics can serve more than documentation purposes: it can be applied as a sound basis for rigorous property checking, upon which more scalable methods can be derived.
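
    As a toy illustration of reducing an interleaving question to constraint solving, the sketch below uses the Z3 SMT solver's Python bindings (rather than the paper's CLP implementation) to give the four events of the store-buffering test integer positions in a global order, expressing the memory model purely as ordering constraints.

```python
# Toy constraint encoding (illustrative only): can the store-buffering test
# end with both reads returning 0? Under SC the answer is no; dropping the
# store->load program-order constraint (TSO-like) makes it possible.

from z3 import Ints, Solver, Distinct, sat

# Thread 1: Wx (x=1) then Ry (r1=y); Thread 2: Wy (y=1) then Rx (r2=x)
wx, ry, wy, rx = Ints("wx ry wy rx")

def outcome_possible(keep_program_order):
    s = Solver()
    s.add(Distinct(wx, ry, wy, rx))                  # a total global order
    for e in (wx, ry, wy, rx):
        s.add(e >= 0, e < 4)
    if keep_program_order:                           # sequential consistency
        s.add(wx < ry, wy < rx)
    # r1 == 0 means Ry precedes Wy; r2 == 0 means Rx precedes Wx
    s.add(ry < wy, rx < wx)
    return s.check() == sat

print(outcome_possible(True))    # False: r1 == r2 == 0 is impossible under SC
print(outcome_possible(False))   # True: relaxing store->load order allows it
```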

    GRACE-1: cross-layer adaptation for multimedia quality and battery energy

    No full text